Multi-label feature selection based on label-specific feature with missing labels
ZHANG Zhihao, LIN Yaojin, LU Shun, GUO Chen, WANG Chenxi
Journal of Computer Applications    2021, 41 (10): 2849-2857.   DOI: 10.11772/j.issn.1001-9081.2020111893
Multi-label feature selection has been widely used in many domains, such as image classification and disease diagnosis. In practice, however, labels are often missing from the label space of the data, which destroys the structure of and correlation between labels and makes it difficult for learning algorithms to select the truly important features. To address this problem, a Multi-label Feature Selection based on Label-specific feature with Missing Labels (MFSLML) algorithm was proposed. Firstly, the label-specific features of each class label were obtained via a sparse learning method. At the same time, the mapping relations between labels and label-specific features were constructed based on a linear regression model and used to recover the missing labels. Finally, experiments were performed on 7 datasets using 4 evaluation metrics. Experimental results show that, compared with some state-of-the-art multi-label feature selection algorithms, such as the multi-label feature selection algorithm based on Max-Dependency and Min-Redundancy (MDMR) and the Multi-label Feature selection with Missing Labels via considering feature interaction (MFML), MFSLML increases the average precision by 4.61 to 5.5 percentage points, showing that it achieves better classification performance.
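The abstract gives no implementation details, so the following is only a minimal sketch of the general idea it describes: learning label-specific features by per-label sparse regression and reusing the fitted linear map to recover missing labels. It assumes scikit-learn, a feature matrix X, and a label matrix Y with missing entries marked as NaN; it is not the authors' MFSLML code.

```python
import numpy as np
from sklearn.linear_model import Lasso

def label_specific_selection(X, Y, alpha=0.05, top_k=20):
    """Illustrative sketch: per-label sparse regression (Lasso) picks
    label-specific features; the fitted linear model is then reused to
    fill in missing labels (NaN entries in Y). Not the MFSLML algorithm itself."""
    Y_filled = Y.copy()
    selected = {}
    for j in range(Y.shape[1]):
        observed = ~np.isnan(Y[:, j])                 # rows where label j is known
        model = Lasso(alpha=alpha).fit(X[observed], Y[observed, j])
        weights = np.abs(model.coef_)
        selected[j] = np.argsort(weights)[::-1][:top_k]   # label-specific features
        missing = ~observed
        if missing.any():                             # recover missing labels via the linear map
            Y_filled[missing, j] = (model.predict(X[missing]) > 0.5).astype(float)
    return selected, Y_filled
```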
Automatic annotation of visual deep neural network
LI Ming, GUO Chenhao, CHEN Xing
Journal of Computer Applications    2020, 40 (6): 1593-1600.   DOI: 10.11772/j.issn.1001-9081.2019101774
To address the problem that developers cannot quickly identify the models they need from the large number of available models, an automatic annotation method for visual deep neural networks based on natural language processing was proposed. Firstly, the field categories of visual neural networks were divided, and keywords and their corresponding weights were calculated according to word frequency and other information. Secondly, a keyword extractor was built to extract keywords from paper abstracts. Finally, the similarities between the extracted keywords and the known weighted keywords were calculated to obtain the application field of a specific model. Experiments were carried out on papers published at three top international computer vision conferences: the IEEE International Conference on Computer Vision (ICCV), the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) and the European Conference on Computer Vision (ECCV). The experimental results indicate that the proposed method provides highly accurate classification results, with a macro average value of 0.89, which verifies its validity.
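The paper's keyword/weight tables are not reproduced here; as a minimal sketch of the same pipeline, the snippet below scores an abstract against per-field keyword lists with TF-IDF and cosine similarity (scikit-learn assumed; the dictionary of field keywords is a hypothetical stand-in for the paper's weighted keyword tables).

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def annotate(abstract, field_keywords):
    """field_keywords: dict mapping a field name to a space-joined string of
    its representative keywords (illustrative stand-in for weighted keyword tables)."""
    fields = list(field_keywords)
    vec = TfidfVectorizer()
    matrix = vec.fit_transform([abstract] + [field_keywords[f] for f in fields])
    sims = cosine_similarity(matrix[0], matrix[1:]).ravel()
    return fields[int(sims.argmax())]          # most similar application field

# annotate("we detect pedestrians with a convolutional network ...",
#          {"object detection": "detection bounding box pedestrian anchor",
#           "image segmentation": "segmentation mask pixel-wise encoder decoder"})
```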
A new compressed vertex chain code
WEI Wei, DUAN Xiaodong, LIU Yongkui, GUO Chen
Journal of Computer Applications    2017, 37 (6): 1747-1752.   DOI: 10.11772/j.issn.1001-9081.2017.06.1747
A chain code is a coding technique that can represent lines, curves and region boundaries with little storage. To improve the compression efficiency of chain codes, a new compressed vertex chain code named Improved Orthogonal 3-Direction Vertex Chain Code (IO3DVCC) was proposed. It combines the statistical characteristics of the Vertex Chain Code (VCC) with the directional characteristics of the Orthogonal 3-direction chain code (3OT), and defines 5 code values in total. The combinations 1,3 and 3,1 in VCC were merged and expressed by code 1. Code 2 has the same meaning as the corresponding VCC code value, and code 3 the same meaning as code value 2 of 3OT. Codes 4 and 5 correspond to two consecutive IO3DVCC code values 1 and eight consecutive VCC code values 2, respectively. Based on Huffman coding, the new chain code is a variable-length code. The code-value probabilities, average expression ability, average length and efficiency of IO3DVCC, the Enhanced Relative 8-Direction Freeman Chain Code (ERD8FCC), the Arithmetic-encoding Variable-length Relative 4-direction Freeman chain code (AVRF4), Arithmetic coding applied to the 3OT chain code (Arith_3OT), the Compressed VCC (CVCC) and the Improved CVCC (ICVCC) were calculated on the contour boundaries of 100 images; the results show that the efficiency of IO3DVCC is the highest. The total code number, total number of binary bits and compression ratio relative to the 8-Direction Freeman Chain Code (8DFCC) of IO3DVCC, Arith_3OT and ICVCC were calculated on the contour boundaries of 20 randomly selected images; the results demonstrate that the compression effect of IO3DVCC is the best.
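As a minimal sketch of how the average code length and coding efficiency reported above can be computed, the snippet below derives Huffman code lengths from a set of code-value probabilities (the paper's actual probabilities for the five IO3DVCC values are not reproduced; any probability vector can be plugged in).

```python
import heapq, math

def huffman_lengths(probs):
    """Compute Huffman code lengths for symbols with the given probabilities
    (the five IO3DVCC code values would be the symbols in the paper's setting)."""
    heap = [(p, i, (i,)) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    counter = len(probs)
    while len(heap) > 1:
        p1, _, group1 = heapq.heappop(heap)
        p2, _, group2 = heapq.heappop(heap)
        for s in group1 + group2:          # every symbol in the merged group gains one bit
            lengths[s] += 1
        heapq.heappush(heap, (p1 + p2, counter, group1 + group2))
        counter += 1
    return lengths

def avg_length_and_efficiency(probs):
    lengths = huffman_lengths(probs)
    avg_len = sum(p * l for p, l in zip(probs, lengths))
    entropy = -sum(p * math.log2(p) for p in probs if p > 0)
    return avg_len, entropy / avg_len      # coding efficiency = entropy / average length
```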
Unambiguous capture model for binary offset carrier modulated signals based on correlation function
OU Zhengbao, GUO Chengjun
Journal of Computer Applications    2016, 36 (6): 1496-1501.   DOI: 10.11772/j.issn.1001-9081.2016.06.1496
The capture of Binary Offset Carrier (BOC) modulated signals is ambiguous. To solve this problem, a decompose-compose algorithm based on the local BOC signal was proposed. Firstly, the local subcarrier signal was decomposed according to the order n of the local BOC signal. Secondly, 2n sub-functions of the BOC signal were obtained by multiplying the pseudo-random code with the decomposed functions from the first step. Then the 2n sub-functions were correlated with the received BOC signal to obtain 2n cross-correlation functions. Finally, these cross-correlation functions were further processed according to the decompose-compose algorithm. Theoretical analysis and simulation results show that, compared with the Offset Quadratic Cross-Correlation (OQCC) algorithm, the proposed decompose-compose algorithm improves the Amplitude Separation Degree of Main and Side Peaks (ASDMSP) by 21.51 dB and 3.4 dB when capturing BOC(1,1) and BOC(2,1) signals respectively. The experimental results show that the decompose-compose algorithm can effectively solve the ambiguity problem of BOC signals.
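The composition rule of the decompose-compose algorithm is specific to the paper and is not reproduced here; the sketch below only illustrates the shared building block of correlating each locally generated sub-function with the received signal and combining the resulting cross-correlations (the absolute-sum combination is an assumption used as a placeholder, not the paper's rule).

```python
import numpy as np

def combine_correlations(received, subfunctions):
    """Correlate each locally generated sub-function with the received BOC
    signal and combine the cross-correlation functions; the absolute-sum
    combination is only a stand-in for the paper's decompose-compose step."""
    cross = [np.correlate(received, s, mode="full") for s in subfunctions]
    return np.sum(np.abs(cross), axis=0)       # combined correlation output
```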
Multipath error of deep coupling system based on integrity
LIU Linlin, GUO Chengjun, TIAN Zhong
Journal of Computer Applications    2016, 36 (3): 610-615.   DOI: 10.11772/j.issn.1001-9081.2016.03.610
To eliminate multipath error in the Global Positioning System (GPS), a multipath error elimination method combining integrity monitoring with a deep coupling structure was proposed. Firstly, GPS and the Strapdown Inertial Navigation System (SINS) were combined into a deep coupling structure, and the pseudorange residual and pseudorange-rate residual output by the phase frequency detector were used as test statistics. Secondly, since the pseudorange residual and pseudorange-rate residual follow a Gaussian distribution, their detection thresholds were calculated. Finally, the detection thresholds were used to evaluate the test statistics, and the corrected pseudorange residual and pseudorange-rate residual were fed into the Kalman filter. In a simulation comparison with a multipath error elimination method without integrity monitoring, the latitude error decreased by about 40 m, the yaw angle error by about 4 degrees, and the north velocity error by about 2 m/s; compared with the traditional multipath elimination method using wavelet filtering, the height error decreased by about 40 m and the pitch angle error by about 5 degrees. The simulation results show that the proposed integrity-based method can effectively eliminate the positioning error caused by multipath (reflected in the position, attitude angle and velocity errors), and that it reduces this error more effectively than the traditional filtering method.
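A minimal sketch of the integrity-check step described above, assuming zero-mean Gaussian residuals with known standard deviation and a chosen false-alarm probability (the deep-coupling filter and the residual statistics themselves are not shown; SciPy assumed):

```python
from scipy.stats import norm

def integrity_check(residual, sigma, p_fa=1e-3):
    """Flag a pseudorange (or pseudorange-rate) residual as multipath-contaminated
    when it exceeds the two-sided Gaussian detection threshold for false-alarm rate p_fa."""
    threshold = norm.ppf(1 - p_fa / 2) * sigma
    return abs(residual) > threshold

# only residuals that pass this check would be fed to the Kalman filter
```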
Improved compression vertex chain code based on Huffman coding
WEI Wei, LIU Yongkui, DUAN Xiaodong, GUO Chen
Journal of Computer Applications    2014, 34 (12): 3565-3569.  
This paper reviewed research on the various chain codes used in image processing and pattern recognition, and proposed a new chain code, the Improved Compressed Vertex Chain Code (ICVCC), based on the Compressed Vertex Chain Code (CVCC). ICVCC adds one code value to CVCC and adopts Huffman coding to encode each code value, yielding a set of variable-length chain codes. The expression ability per code, the average length and efficiency, and the compression ratio with respect to the 8-Direction Freeman Chain Code (8DFCC) were calculated from the statistics of a large number of images. The experimental results show that the efficiency of the proposed ICVCC is the highest and its compression ratio is satisfactory.
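As a minimal sketch of the compression-ratio metric used above (assumption: each 8DFCC symbol costs 3 bits, and the Huffman code lengths assigned to the chain-code values are given; the actual counts from the paper are not reproduced):

```python
def compression_ratio(code_counts, code_bits, n_boundary_points):
    """code_counts: occurrences of each ICVCC code value along a boundary;
    code_bits: Huffman code length in bits assigned to each value;
    n_boundary_points: length of the equivalent 8DFCC chain (3 bits per code)."""
    icvcc_bits = sum(c * b for c, b in zip(code_counts, code_bits))
    freeman_bits = 3 * n_boundary_points
    return freeman_bits / icvcc_bits
```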

Image denoising algorithm using fractional-order integral with edge compensation
HUANG Guo, CHEN Qingli, XU Li, MEN Tao, PU Yifei
Journal of Computer Applications    2014, 34 (10): 2957-2962.   DOI: 10.11772/j.issn.1001-9081.2014.10.2957
To solve the problem that existing image denoising algorithms based on fractional-order integration lose edge and texture information, an image denoising algorithm using fractional-order integration with edge compensation was presented. The fractional-order integral operator behaves as a sharp low-pass filter. The Cauchy integral formula was introduced into digital image denoising, and the numerical fractional-order integration of the image was computed by slope approximation. During iterative denoising, the algorithm built the denoising mask with a slightly higher fractional integral order while the image Signal-to-Noise Ratio (SNR) was rising, and with a slightly lower order once the SNR began to decline; in addition, an edge compensation mechanism partially restored the edge and texture information of the image. The proposed algorithm applies these order-selection strategies together with the edge compensation mechanism throughout the iterative denoising process. The experimental results show that, compared with traditional denoising algorithms, it removes noise to obtain a higher SNR and better visual effect while appropriately restoring the edge and texture information of the image.
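The paper's Cauchy-integral mask construction and edge compensation are not reproduced here; the sketch below only illustrates the adaptive-order iteration logic described in the abstract, with `build_fractional_mask` and `estimate_snr` as hypothetical placeholder functions standing in for the paper's mask and SNR estimate.

```python
import numpy as np
from scipy.ndimage import convolve

def iterative_denoise(image, build_fractional_mask, estimate_snr,
                      high_order=0.9, low_order=0.5, iters=10):
    """Iteratively convolve with a fractional-integral mask whose order is kept
    slightly higher while the SNR is rising and lowered once it starts to decline.
    Both helper functions are illustrative placeholders, not the paper's method."""
    prev_snr = -np.inf
    out = image.astype(float)
    for _ in range(iters):
        snr = estimate_snr(out)
        order = high_order if snr >= prev_snr else low_order   # adaptive order choice
        out = convolve(out, build_fractional_mask(order), mode="nearest")
        prev_snr = snr
    return out
```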

Multi-round vote location verification mechanism based on weight and difference value in vehicular Ad Hoc network
WANG Xueyin, FENG Jianguo, CHEN Jiawei, ZHANG Fang, XUE Xiaoping
Journal of Computer Applications    2014, 34 (10): 2771-2776.   DOI: 10.11772/j.issn.1001-9081.2014.10.2771
To address the location verification problem caused by collusion attacks in Vehicular Ad Hoc NETworks (VANET), a multi-round vote location verification mechanism based on weight and difference was proposed. In the mechanism, a static frame was introduced and the beacon message format was redesigned to reduce the delay of location verification. After a malicious-vehicle filtering process, neighbors with different degrees of trust voted on the position of a specific region, yielding credible position verification. The experimental results illustrate that, under collusion attack, the scheme achieves a higher accuracy of 93.4% compared with the Minimum Mean Square Estimation (MMSE) based location verification mechanism.
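A minimal sketch of the weighted-voting idea in a single round (the trust weights, difference values, filtering rule and multi-round logic of the paper are simplified away; the names and threshold below are illustrative only):

```python
def verify_position(neighbor_votes, threshold=0.5):
    """neighbor_votes: list of (trust_weight, supports) pairs, where `supports`
    is True if that neighbor's observation is consistent with the claimed position.
    The claimed position is accepted when the weighted support reaches the threshold."""
    total = sum(w for w, _ in neighbor_votes)
    support = sum(w for w, ok in neighbor_votes if ok)
    return total > 0 and support / total >= threshold
```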

Robustness analysis of unstructured P2P botnet
XU Xiao-dong, CHENG Jian-guo, ZHU Shi-rui
Journal of Computer Applications    2011, 31 (12): 3343-3345.  
The constant improvement of botnet structures poses a great threat to network security, so studying the inherent characteristics of botnet structure is important for defending against this kind of attack. This paper simulated an unstructured P2P botnet from the perspective of complex networks, then proposed metrics and applied centrality theory from complex networks to analyze the robustness of the unstructured P2P botnet under node failures. The experimental results demonstrate that the unstructured P2P botnet is highly robust against random node failures, but that its robustness drops quickly when central nodes fail.
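A minimal sketch of this kind of robustness experiment, using networkx and a random graph as a stand-in for the simulated unstructured P2P botnet: remove a fraction of nodes either at random or by highest degree centrality and measure the relative size of the largest remaining connected component.

```python
import random
import networkx as nx

def robustness(graph, fraction=0.2, targeted=False):
    """Remove a fraction of nodes (randomly, or by highest degree centrality)
    and return the relative size of the largest remaining connected component."""
    g = graph.copy()
    k = int(fraction * g.number_of_nodes())
    if targeted:
        victims = [n for n, _ in sorted(g.degree, key=lambda x: x[1], reverse=True)[:k]]
    else:
        victims = random.sample(list(g.nodes), k)
    g.remove_nodes_from(victims)
    if g.number_of_nodes() == 0:
        return 0.0
    return len(max(nx.connected_components(g), key=len)) / graph.number_of_nodes()

# bot = nx.erdos_renyi_graph(1000, 0.01)   # stand-in topology, not the paper's model
# robustness(bot, targeted=False), robustness(bot, targeted=True)
```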
Environmental perception and adaptivity in moving object detection
ZHANG Yan, GUO Ji-chang, WANG Chen
Journal of Computer Applications    2011, 31 (07): 1827-1830.  
In a complicated environment, any change can reduce the accuracy of object detection. Therefore, an algorithm combining the Generalized Gaussian Mixture Model (GGMM) with background subtraction was put forward to detect moving objects. The model has the flexibility to perceive the environment and model the video background adaptively in the presence of environmental changes (such as gradual illumination change, background disturbance, shadows and noise), and it can adapt quickly to sudden illumination changes. To meet the real-time requirement, the algorithm updates the model only once every two frames. The experiments show that it meets the real-time requirement and detects moving objects accurately.
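OpenCV only ships a standard Gaussian-mixture background subtractor, not the generalized variant used here, so the sketch below merely illustrates the overall background-subtraction pipeline with the model updated on alternate frames; it is not the paper's GGMM.

```python
import cv2

def detect_moving_objects(video_path):
    """Background-subtraction sketch; cv2's MOG2 is a standard GMM, used only
    as a stand-in for the paper's generalized Gaussian mixture model."""
    cap = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
    frame_id = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # learningRate=-1 lets the model update automatically; 0 freezes it,
        # so the background model is only updated on every other frame
        rate = -1 if frame_id % 2 == 0 else 0
        mask = subtractor.apply(frame, learningRate=rate)
        frame_id += 1
        yield mask                               # foreground mask for this frame
    cap.release()
```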
New group search optimizer algorithm based on chaotic searching
FANG Zhen-guo, CHEN De-bao
Journal of Computer Applications    2011, 31 (03): 657-659.   DOI: 10.3724/SP.J.1087.2011.00657
To improve the performance of the Group Search Optimizer (GSO), a Chaotic Group Search Optimizer (CGSO) combining GSO with the global search characteristics of chaotic methods was proposed. In the method, the best position of the producer was updated by chaotic search, the new position of a scrounger was determined by the producer's position and the best position the scrounger had achieved so far, and the new positions of the rangers were obtained by chaotic mutation. The global convergence of GSO was improved by exploiting the initial-value sensitivity of the Logistic map to expand the search scope and its global ergodicity to explore positions. Four function optimization problems were simulated with CGSO and GSO, and the experimental results indicate that CGSO is more effective than GSO.
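A minimal sketch of chaotic search with the Logistic map, showing only the chaotic-perturbation component rather than the full producer/scrounger/ranger machinery of GSO; the function names and constants are illustrative, not from the paper.

```python
import numpy as np

def chaotic_search(f, x_best, lower, upper, steps=50):
    """Generate candidate positions from a Logistic-map chaotic sequence mapped
    onto the search range and keep the best one found (minimization assumed)."""
    dim = len(x_best)
    z = np.random.uniform(0.01, 0.99, size=dim)   # initial chaotic seeds in (0, 1)
    best_x, best_val = np.asarray(x_best, float), f(x_best)
    for _ in range(steps):
        z = 4.0 * z * (1.0 - z)                   # Logistic map, fully chaotic at r = 4
        candidate = lower + z * (upper - lower)   # map the chaotic values onto the bounds
        val = f(candidate)
        if val < best_val:
            best_x, best_val = candidate, val
    return best_x, best_val
```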
Strategy of generating Web Server’s responding workload
HUO Li-ping, GUO Cheng-cheng
Journal of Computer Applications    2005, 25 (06): 1458-1460.   DOI: 10.3724/SP.J.1087.2005.01458
Analyzing the characteristics of the workload generated by a Web server's responses is important for Web server performance evaluation. The characteristics of the server's responses and a properly constructed workload help in understanding how servers and networks respond to varying loads. This paper presented a strategy for generating a Web server responding-workload file.